10 research outputs found

    Advantages of Unfair Quantum Ground-State Sampling

    Get PDF
    The debate around the potential superiority of quantum annealers over their classical counterparts has been ongoing since the inception of the field by Kadowaki and Nishimori close to two decades ago. Recent technological breakthroughs in the field, which have led to the manufacture of experimental prototypes of quantum annealing optimizers with sizes approaching the practical regime, have reignited this discussion. However, the demonstration of quantum annealing speedups remains to this day an elusive, albeit coveted, goal. Here, we examine the power of quantum annealers to provide a different type of quantum enhancement of practical relevance, namely, their ability to serve as useful samplers from the ground-state manifolds of combinatorial optimization problems. We study, both numerically, by simulating ideal stoquastic and non-stoquastic quantum annealing processes, and experimentally, using a commercially available quantum annealing processor, the ability of quantum annealers to sample the ground states of spin glasses differently than classical thermal samplers. We demonstrate that i) quantum annealers in general sample the ground-state manifolds of spin glasses very differently than thermal optimizers, ii) the nature of the quantum fluctuations driving the annealing process has a decisive effect on the final distribution over ground states, and iii) the experimental quantum annealer samples ground-state manifolds significantly differently than thermal and ideal quantum annealers. We illustrate how quantum annealers may serve as powerful tools when complementing standard sampling algorithms. Comment: 13 pages, 11 figures
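    As a concrete illustration of what "sampling the ground-state manifold" means, the sketch below (not the paper's method; the spin count, couplings, and temperature are arbitrary choices) enumerates the degenerate ground states of a tiny random Ising spin glass and tallies how often a classical low-temperature Metropolis sampler visits each one. A sampler that is "fair" over the manifold would hit each degenerate ground state with equal frequency; the abstract's point is that quantum and thermal samplers generally weight these states differently.

```python
# Minimal illustration (not the paper's method): enumerate the degenerate
# ground-state manifold of a tiny random Ising spin glass and measure how a
# classical thermal (Metropolis) sampler distributes its hits over that
# manifold. Couplings, sizes, and temperature are arbitrary choices.
import itertools
import random

import numpy as np

rng = np.random.default_rng(0)
N = 8                                                      # small enough to enumerate
J = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), k=1)     # random +/-1 couplings

def energy(s):
    """Ising energy E(s) = -sum_{i<j} J_ij s_i s_j for spins s_i in {-1,+1}."""
    return -float(s @ J @ s)

# Exhaustively find the ground-state manifold.
configs = [np.array(c) for c in itertools.product([-1, 1], repeat=N)]
energies = [energy(s) for s in configs]
e_min = min(energies)
ground = {tuple(s) for s, e in zip(configs, energies) if np.isclose(e, e_min)}
print(f"ground-state degeneracy: {len(ground)}")

# Low-temperature Metropolis sampling; count visits to each ground state.
beta, sweeps = 5.0, 20000
s = rng.choice([-1, 1], size=N)
hits = {g: 0 for g in ground}
for _ in range(sweeps):
    i = random.randrange(N)
    flipped = s.copy()
    flipped[i] *= -1
    if random.random() < np.exp(-beta * (energy(flipped) - energy(s))):
        s = flipped
    if tuple(s) in hits:
        hits[tuple(s)] += 1

# A "fair" sampler would hit each degenerate ground state equally often.
total = sum(hits.values()) or 1
for g, n in hits.items():
    print(tuple(map(int, g)), round(n / total, 3))
```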

    Distributed and Interactive Simulations Operating at Large Scale for Transcontinental Experimentation

    Get PDF
    This paper addresses the use of emerging technologies to respond to the increasing need for larger and more sophisticated agent-based simulations of urban areas. The U.S. Joint Forces Command has found it useful to seek out and apply technologies largely developed for academic research in the physical sciences. The use of these techniques in transcontinentally distributed, interactive experimentation has been shown to be effective and stable, and the analyses of the data find parallels in the behavioral sciences. The authors relate their decade-and-a-half experience in implementing high-performance computing hardware, software, and user interface architectures. These have enabled heretofore unachievable results. They focus on three advances: the use of general-purpose graphics processing units as computing accelerators, the efficiencies derived from implementing interest-managed routers in distributed systems, and the benefits of effective data management for the voluminous information.
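    The interest-managed routing mentioned above can be pictured with a minimal sketch: rather than broadcasting every entity state update to every federate, a router forwards an update only to subscribers whose declared interest region covers it. The class and method names below are hypothetical and are not the JSAF or RTI router API.

```python
# Hedged sketch of interest management: a router forwards an entity state
# update only to federates whose declared interest regions contain it,
# instead of broadcasting to everyone. Names are illustrative, not the
# actual JSAF/RTI interfaces.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Region:
    """Axis-aligned 2-D interest region (e.g., a patch of urban terrain)."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, x: float, y: float) -> bool:
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax


@dataclass
class InterestRouter:
    # federate name -> list of regions it has declared interest in
    subscriptions: dict = field(default_factory=dict)

    def subscribe(self, federate: str, region: Region) -> None:
        self.subscriptions.setdefault(federate, []).append(region)

    def route(self, update: dict) -> list:
        """Return only the federates whose interest regions match this update."""
        x, y = update["x"], update["y"]
        return [f for f, regions in self.subscriptions.items()
                if any(r.contains(x, y) for r in regions)]


router = InterestRouter()
router.subscribe("sensor_sim", Region(0, 0, 100, 100))
router.subscribe("c2_sim", Region(50, 50, 200, 200))
print(router.route({"entity": 42, "x": 75.0, "y": 80.0}))  # both federates
print(router.route({"entity": 43, "x": 10.0, "y": 10.0}))  # only sensor_sim
```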

    Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2005 Simulation Data Grid: Joint Experimentation Data Management and Analysis

    No full text
    The need to present quantifiable results from simulations to support transformational findings is driving the creation of very large and geographically dispersed data collections. The Joint Experimentation Directorate (J9) of USJFCOM and the Joint Advanced Warfighting Project are conducting a series of Urban Resolve experiments to investigate concepts for applying future technologies to joint urban warfare. The recently concluded phase I of the experiment utilized and integrated multiple scalable parallel processor (SPP) sites distributed across the United States, from supercomputing centers at Maui and Wright-Patterson to J9 at Norfolk, Virginia. This computational power is required to model futuristic sensor technology and the complexity of urban environments. For phase I, the simulation generated more than two terabytes of raw data at a rate of more than 10 GB per hour. The size and distributed nature of this type of data collection pose significant challenges in developing the corresponding data-intensive applications that manage and analyze them. Building on lessons learned in developing data management tools for Urban Resolve, we present our next-generation data management and analysis tool, called the Simulation Data Grid (SDG). Two principles drive the design of SDG: 1) minimize network communication overhead (especially across SPP sites) by storing data near the point of generation and only selectively propagating it as needed, and 2) maximize the use of SPP computational resources and storage by distributing analyses across SPP sites to reduce, filter, and aggregate the data locally. Our key implementation principle is to leverage existing open standards and infrastructure from Grid Computing. We show how our services interface with and build on top of this existing Grid infrastructure.
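    A minimal sketch of the two SDG design principles follows: raw logs stay at the SPP site that generated them, each site reduces its own data to a small summary, and only those summaries cross the network to be merged at the analyst's site. The site names, record fields, and function names are illustrative assumptions, not the SDG interface.

```python
# Hedged sketch of the two design principles described above: keep logged
# data at the site that generated it, push the filter/aggregate step to each
# site, and ship only small per-site summaries over the WAN. The layout,
# record fields, and function names are illustrative only.
from collections import Counter

# Pretend each site holds its locally logged detection records.
site_logs = {
    "maui_spp":   [{"sensor": "uav_1", "detections": 12},
                   {"sensor": "uav_2", "detections": 7}],
    "wpafb_spp":  [{"sensor": "uav_1", "detections": 3},
                   {"sensor": "ugs_9", "detections": 21}],
    "norfolk_j9": [{"sensor": "uav_2", "detections": 5}],
}

def local_aggregate(records):
    """Runs at the site that owns the data: reduce raw logs to a small summary."""
    counts = Counter()
    for rec in records:
        counts[rec["sensor"]] += rec["detections"]
    return counts

def global_merge(partials):
    """Runs at the analyst's site: merge the small per-site summaries."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

# Only the aggregated Counters cross the network, not the raw terabytes.
partials = [local_aggregate(records) for records in site_logs.values()]
print(global_merge(partials))
```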

    Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) 2007 Implementing a GPU-Enhanced Cluster for Large-Scale Simulations

    No full text
    The simulation community has often been hampered by constraints in computing: not enough resolution, not enough entities, not enough behavioral variants. Higher-performance computers can ameliorate those constraints. The use of Linux clusters is one path to higher performance; the use of graphics processing units (GPUs) as accelerators is another. Merging the two paths holds even more promise. The authors were the principal architects of a successful proposal to the High Performance Computing Modernization Program (HPCMP) for a new 512-CPU (1,024-core), GPU-enhanced Linux cluster for the Joint Forces Command’s Joint Experimentation Directorate (J9). In this paper, the basic theories underlying the use of GPUs as accelerators for intelligent-agent, entity-level simulations are laid out, the previous research is surveyed, and the ongoing efforts are outlined. The simulation needs of J9, the direction from HPCMP, and the careful analysis of the intersection of these are explicitly discussed. The configuration of the cluster and the assumptions that led to the conclusion that GPUs might increase performance by a factor of two are carefully documented. The processes that led to that configuration, as delivered to JFCOM, will be specified, and alternatives that were considered will be analyzed. Planning and implementation strategies are reviewed and justified. The presentation will then report in detail on the execution of the actual installation and implementation of the JSAF simulation on the cluster in August 2007. Issues, problems, and solutions will all be reported objectively.
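    The kind of work being offloaded can be sketched as follows: the per-entity inner loop of an entity-level simulation is data parallel, so it can be batched into a single array operation of the sort that maps naturally onto a GPU accelerator. NumPy stands in for the accelerator here; this is an assumption-laden illustration, not the JSAF GPU code path.

```python
# Hedged sketch of the acceleration pattern discussed above: the per-entity
# inner loop is data parallel, so it can be batched and handed to an
# accelerator. NumPy stands in for a GPU kernel; the actual JSAF/GPU code
# path is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n_entities = 1_000_000
positions = rng.uniform(0, 10_000, size=(n_entities, 2))   # x, y in metres
velocities = rng.uniform(-5, 5, size=(n_entities, 2))       # m/s
dt = 1.0                                                     # timestep in seconds

def step_scalar(pos, vel):
    """CPU-style per-entity loop (slow for millions of entities)."""
    out = pos.copy()
    for i in range(len(pos)):
        out[i, 0] += vel[i, 0] * dt
        out[i, 1] += vel[i, 1] * dt
    return out

def step_batched(pos, vel):
    """Batched update: one array operation over all entities, the shape of
    work that maps naturally onto a GPU accelerator."""
    return pos + vel * dt

new_positions = step_batched(positions, velocities)
print(new_positions.shape)
```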

    Enabling 1,000,000-entity simulations on distributed Linux clusters

    No full text
    The Information Sciences Institute and Caltech are enabling USJFCOM and the Institute for Defense Analyses to conduct entity-level simulation experiments using hundreds of distributed computer nodes on Linux clusters as a vehicle for simulating millions of JSAF entities. Described below is our experience with the design and implementation of the code that increased scalability, thereby enabling two orders of magnitude of growth and the effective use of DoD high-end computers. A typical JSAF experiment generates several terabytes of logged data, which is queried in near real time and for months afterward. The amount of logged data and the desired database query performance mandated the redesign of the original logger system's monolithic database, making it distributed and incorporating several advanced concepts. System procedures and practices were established to reliably execute the global-scale simulations, effectively operate the distributed computers, efficiently process and store terabytes of data, and provide analysts with straightforward access to the data.
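    The logger redesign can be pictured with a small sketch: each simulation node writes its log records to its own partition, and an analyst query fans out to every partition and merges the results, so writes stay local while reads remain straightforward. SQLite and the schema below are stand-ins chosen for the illustration, not the actual logger implementation.

```python
# Hedged sketch of the logger redesign described above: instead of one
# monolithic database, each simulation node writes log records to its own
# store, and an analyst query fans out across the partitions and merges the
# results. SQLite and this schema are stand-ins, not the actual system.
import sqlite3

NODES = ["node_a", "node_b", "node_c"]

def node_db():
    db = sqlite3.connect(":memory:")          # one partition per simulation node
    db.execute("CREATE TABLE events (t REAL, entity INTEGER, kind TEXT)")
    return db

partitions = {n: node_db() for n in NODES}

# Each node logs only its own entities' events (writes stay local).
partitions["node_a"].executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(0.5, 1, "detect"), (1.2, 2, "move")])
partitions["node_b"].executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(0.9, 3, "detect")])

def distributed_query(sql, params=()):
    """Fan the same query out to every partition and concatenate the rows."""
    rows = []
    for db in partitions.values():
        rows.extend(db.execute(sql, params).fetchall())
    return rows

print(distributed_query("SELECT * FROM events WHERE kind = ?", ("detect",)))
```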

    IEEE/ACM Distributed Simulation and Real Time Applications 2010 Distributed and Interactive Simulations Operating at Large Scale for Transcontinental Experimentation

    No full text
    This paper addresses the use of emerging technologies to respond to the increasing need for larger and more sophisticated agent-based simulations of urban areas.

    Bibliography

    No full text